Results 1 - 11 of 11

1.
Brief Bioinform ; 25(2), 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38483256

ABSTRACT

Numerous imaging techniques are available for observing and interrogating biological samples, and several of them can be used consecutively to enable correlative analysis of different image modalities with varying resolutions and the inclusion of structural or molecular information. Achieving accurate registration of multimodal images is essential for the correlative analysis process, but it remains a challenging computer vision task with no widely accepted solution. Moreover, supervised registration methods require annotated data produced by experts, which is limited. To address this challenge, we propose a general unsupervised pipeline for multimodal image registration using deep learning. We provide a comprehensive evaluation of the proposed pipeline versus the current state-of-the-art image registration and style transfer methods on four types of biological problems utilizing different microscopy modalities. We found that style transfer of modality domains paired with fully unsupervised training leads to comparable image registration accuracy to supervised methods and, most importantly, does not require human intervention.
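A minimal, hypothetical sketch of the unsupervised idea described above: assuming the style-transfer step has already mapped one modality into the appearance of the other, a small network predicts an affine transform and is trained purely with an image-similarity loss, with no ground-truth alignments. Network size, loss choice, and data are placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegNet(nn.Module):
    """Predicts a 2x3 affine matrix aligning a moving image to a fixed image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)
        # initialize to the identity transform
        self.head.weight.data.zero_()
        self.head.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, moving, fixed):
        theta = self.head(self.features(torch.cat([moving, fixed], dim=1)))
        return theta.view(-1, 2, 3)

def warp(moving, theta):
    grid = F.affine_grid(theta, moving.shape, align_corners=False)
    return F.grid_sample(moving, grid, align_corners=False)

# Unsupervised training loop: no ground-truth transforms, only image similarity.
model = AffineRegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
moving = torch.rand(4, 1, 128, 128)   # style-transferred modality A (toy data)
fixed = torch.rand(4, 1, 128, 128)    # modality B (toy data)
for _ in range(100):
    theta = model(moving, fixed)
    loss = F.mse_loss(warp(moving, theta), fixed)  # similarity loss; NCC is also common
    opt.zero_grad(); loss.backward(); opt.step()
```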


Subject(s)
Deep Learning; Humans; Microscopy
2.
Nat Commun ; 15(1): 1594, 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38383513

ABSTRACT

Measuring the phenotypic effect of treatments on cells through imaging assays is an efficient and powerful way of studying cell biology, and requires computational methods for transforming images into quantitative data. Here, we present an improved strategy for learning representations of treatment effects from high-throughput imaging, following a causal interpretation. We use weakly supervised learning for modeling associations between images and treatments, and show that it encodes both confounding factors and phenotypic features in the learned representation. To facilitate their separation, we constructed a large training dataset with images from five different studies to maximize experimental diversity, following insights from our causal analysis. Training a model with this dataset successfully improves downstream performance, and produces a reusable convolutional network for image-based profiling, which we call Cell Painting CNN. We evaluated our strategy on three publicly available Cell Painting datasets, and observed that the Cell Painting CNN improves performance in downstream analysis up to 30% with respect to classical features, while also being more computationally efficient.
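As a hedged illustration of the weakly supervised strategy (not the released Cell Painting CNN itself), the sketch below trains a standard CNN to predict treatment labels and then reuses its penultimate-layer features as image-based profiles; the class count, backbone, and data are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

n_treatments = 300                      # assumed number of treatment classes
backbone = resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, n_treatments)

opt = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Toy batch standing in for Cell Painting crops; real 5-channel images would need
# an adapted first layer, here we assume 3-channel composites for simplicity.
images = torch.rand(8, 3, 224, 224)
treatments = torch.randint(0, n_treatments, (8,))

logits = backbone(images)
loss = loss_fn(logits, treatments)      # weak labels: the treatment, not the phenotype
opt.zero_grad(); loss.backward(); opt.step()

# Profiling: drop the classifier head and use pooled features as the representation,
# then aggregate image-level features per well (e.g., by averaging).
backbone.fc = nn.Identity()
with torch.no_grad():
    profiles = backbone(images)          # shape: (8, 512)
well_profile = profiles.mean(dim=0)      # simple per-well aggregation
```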


Subject(s)
Neural Networks, Computer
3.
bioRxiv ; 2023 Jun 18.
Article in English | MEDLINE | ID: mdl-37398158

ABSTRACT

Accurately quantifying cellular morphology at scale could substantially empower existing single-cell approaches. However, measuring cell morphology remains an active field of research, which has inspired multiple computer vision algorithms over the years. Here, we show that DINO, a vision-transformer based, self-supervised algorithm, has a remarkable ability for learning rich representations of cellular morphology without manual annotations or any other type of supervision. We evaluate DINO on a wide variety of tasks across three publicly available imaging datasets of diverse specifications and biological focus. We find that DINO encodes meaningful features of cellular morphology at multiple scales, from subcellular and single-cell resolution, to multi-cellular and aggregated experimental groups. Importantly, DINO successfully uncovers a hierarchy of biological and technical factors of variation in imaging datasets. The results show that DINO can support the study of unknown biological variation, including single-cell heterogeneity and relationships between samples, making it an excellent tool for image-based biological discovery.
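A hedged sketch of the feature-extraction step: loading a publicly released self-supervised DINO ViT through torch.hub (assuming the facebookresearch/dino hub entry point) and computing an embedding for an image. Channel handling and preprocessing are simplifying assumptions, not the evaluation protocol of the paper.

```python
import torch
import numpy as np
from PIL import Image
from torchvision import transforms

# Pretrained self-supervised ViT-S/16 from the public DINO release (downloads weights).
model = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Toy 3-channel "cell image"; in practice fluorescence channels are mapped to RGB
# or the patch-embedding layer is adapted to the channel count.
img = Image.fromarray((np.random.rand(256, 256, 3) * 255).astype('uint8'))

with torch.no_grad():
    embedding = model(preprocess(img).unsqueeze(0))   # e.g. shape (1, 384) for ViT-S/16

# Embeddings can then be aggregated per cell, per field of view, or per perturbation.
```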

4.
Nat Commun ; 14(1): 1967, 2023 04 08.
Article in English | MEDLINE | ID: mdl-37031208

ABSTRACT

Predicting assay results for compounds virtually using chemical structures and phenotypic profiles has the potential to reduce the time and resources of screens for drug discovery. Here, we evaluate the relative strength of three high-throughput data sources-chemical structures, imaging (Cell Painting), and gene-expression profiles (L1000)-to predict compound bioactivity using a historical collection of 16,170 compounds tested in 270 assays for a total of 585,439 readouts. All three data modalities can predict compound activity for 6-10% of assays, and in combination they predict 21% of assays with high accuracy, which is a 2 to 3 times higher success rate than using a single modality alone. In practice, the accuracy of predictors could be lower and still be useful, increasing the assays that can be predicted from 37% with chemical structures alone up to 64% when combined with phenotypic data. Our study shows that unbiased phenotypic profiling can be leveraged to enhance compound bioactivity prediction to accelerate the early stages of the drug-discovery process.
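A hypothetical sketch of the modality-combination idea: fit one predictor per data source, optionally fuse the features, and call an assay predictable when a score threshold is reached. Feature sets, classifier, and the "high accuracy" threshold are placeholder assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_compounds = 500
modalities = {
    "structure": rng.random((n_compounds, 1024)),   # e.g. Morgan fingerprints (toy)
    "cellpaint": rng.random((n_compounds, 700)),    # image-based profiles (toy)
    "l1000":     rng.random((n_compounds, 978)),    # gene-expression profiles (toy)
}
active = rng.integers(0, 2, n_compounds)            # one assay's readout (toy labels)

scores = {}
for name, X in modalities.items():
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores[name] = cross_val_score(clf, X, active, cv=5, scoring="roc_auc").mean()

# Simple early fusion of all modalities into one feature matrix.
fused = np.hstack(list(modalities.values()))
scores["fusion"] = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    fused, active, cv=5, scoring="roc_auc").mean()

threshold = 0.9                                      # assumed "high accuracy" cutoff
predictable = {name: score >= threshold for name, score in scores.items()}
print(scores, predictable)
```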


Subject(s)
Drug Discovery; Transcriptome; Drug Discovery/methods; Biological Assay; High-Throughput Screening Assays/methods
5.
Trends Cell Biol ; 32(4): 295-310, 2022 04.
Article in English | MEDLINE | ID: mdl-35067424

ABSTRACT

Single-nucleus segmentation is a frequent challenge in microscopy image processing, since it is the first step of many quantitative data analysis pipelines. The quality of tracking single cells, extracting features or classifying cellular phenotypes strongly depends on segmentation accuracy. Worldwide competitions have been held to improve segmentation, and recent years have brought significant improvements: large annotated datasets are now freely available, several 2D segmentation strategies have been extended to 3D, and deep learning approaches have increased accuracy. However, even today, no generally accepted solution or benchmarking platform exists. We review the most recent single-cell segmentation tools and provide an interactive method browser to select the most appropriate solution.
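For context, a minimal sketch of one classical baseline family that such reviews compare against deep-learning tools: Otsu thresholding followed by a distance-transform-seeded watershed (scikit-image). Parameters and the sample image are illustrative only.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import data, filters, measure, segmentation

image = data.human_mitosis()                      # sample nuclei image shipped with scikit-image
mask = image > filters.threshold_otsu(image)      # foreground/background split

# Seeds from local maxima of the distance transform, then watershed to split touching nuclei.
distance = ndi.distance_transform_edt(mask)
seeds = measure.label(distance > 0.5 * distance.max())
labels = segmentation.watershed(-distance, seeds, mask=mask)

print(f"{labels.max()} nuclei detected")
```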


Subject(s)
Image Processing, Computer-Assisted; Microscopy; Cell Nucleus; Humans; Image Processing, Computer-Assisted/standards; Microscopy/methods; Microscopy/trends; Single-Cell Analysis/methods
6.
Sci Rep ; 11(1): 14813, 2021 07 20.
Article in English | MEDLINE | ID: mdl-34285291

ABSTRACT

Recent statistics report that more than 3.7 million new cases of cancer occur in Europe yearly, and the disease accounts for approximately 20% of all deaths. High-throughput screening of cancer cell cultures has dominated the search for novel, effective anticancer therapies in the past decades. Recently, functional assays with patient-derived ex vivo 3D cell cultures have gained importance for drug discovery and precision medicine. We recently evaluated the major advancements and needs of 3D cell culture screening, and concluded that strictly standardized and robust sample preparation is the most desired development. Here we propose an artificial intelligence-guided, low-cost 3D cell culture delivery system. It consists of a light microscope, a micromanipulator, a syringe pump, and a controller computer. The system performs morphology-based feature analysis on spheroids and can select uniformly sized or shaped spheroids to transfer them between various sample holders. It can pick samples from standard sample holders, including Petri dishes and microwell plates, and then transfer them to a variety of holders up to 384-well plates. The device performs reliable semi- and fully automated spheroid transfer. This results in highly controlled experimental conditions and eliminates non-trivial side effects of sample variability, which is a key step towards next-generation precision medicine.
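A hedged sketch of the morphology-based selection step only (not the device's control software): segment spheroid candidates, measure size and circularity, and keep those within target ranges. The thresholds and the toy image are placeholders.

```python
import numpy as np
from skimage import filters, measure

def select_spheroids(image, min_area=2000, max_area=20000, min_circularity=0.8):
    """Return spheroid candidates whose size and shape fall within the target ranges."""
    mask = image > filters.threshold_otsu(image)
    labels = measure.label(mask)
    selected = []
    for region in measure.regionprops(labels):
        circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        if min_area <= region.area <= max_area and circularity >= min_circularity:
            selected.append({"label": region.label,
                             "centroid": region.centroid,   # pick-up coordinate
                             "area": region.area,
                             "circularity": circularity})
    return selected

# Toy image with two bright blobs standing in for spheroids.
img = np.zeros((300, 300))
yy, xx = np.mgrid[:300, :300]
img[(yy - 100) ** 2 + (xx - 100) ** 2 < 60 ** 2] = 1.0
img[(yy - 220) ** 2 + (xx - 220) ** 2 < 30 ** 2] = 1.0
print(select_spheroids(img))
```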


Asunto(s)
Técnicas de Cultivo de Célula/instrumentación , Neoplasias/patología , Esferoides Celulares/citología , Inteligencia Artificial , Línea Celular Tumoral , Aprendizaje Profundo , Ensayos de Selección de Medicamentos Antitumorales , Ensayos Analíticos de Alto Rendimiento , Humanos , Neoplasias/tratamiento farmacológico , Medicina de Precisión , Esferoides Celulares/efectos de los fármacos , Esferoides Celulares/patología
8.
PeerJ ; 9: e12502, 2021.
Article in English | MEDLINE | ID: mdl-35003914

ABSTRACT

SUMMARY: We developed PyLAE, a new tool for determining local ancestry along a genome using whole-genome sequencing data or high-density genotyping experiments. PyLAE can process an arbitrarily large number of ancestral populations (with or without an informative prior). Since PyLAE does not involve estimating many parameters, it can process thousands of genomes within a day. PyLAE can run on phased or unphased genomic data. We have shown how PyLAE can be applied to the identification of differentially enriched pathways between populations. The local ancestry approach results in higher enrichment scores compared to whole-genome approaches. We benchmarked PyLAE using the 1000 Genomes dataset, comparing the aggregated predictions with the global admixture results and the current gold standard program RFMix. Computational efficiency, minimal requirements for data pre-processing, straightforward presentation of results, and ease of installation make PyLAE a valuable tool to study admixed populations. AVAILABILITY AND IMPLEMENTATION: The source code and installation manual are available at https://github.com/smetam/pylae.
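A generic, hypothetical illustration of the sanity check mentioned above, collapsing per-window local-ancestry calls into genome-wide proportions for comparison with global admixture estimates. This is not PyLAE's internal code or file format.

```python
from collections import Counter

# Per-window ancestry assignments for one individual (toy data).
windows = ["EUR", "EUR", "AFR", "EUR", "AFR", "EAS", "EUR", "AFR"]

counts = Counter(windows)
local_fractions = {anc: n / len(windows) for anc, n in counts.items()}

# Global admixture proportions for the same individual, e.g. from ADMIXTURE (toy values).
global_fractions = {"EUR": 0.50, "AFR": 0.40, "EAS": 0.10}

for anc in sorted(set(local_fractions) | set(global_fractions)):
    lf = local_fractions.get(anc, 0.0)
    gf = global_fractions.get(anc, 0.0)
    print(f"{anc}: local={lf:.2f}  global={gf:.2f}  diff={abs(lf - gf):.2f}")
```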

9.
Comput Struct Biotechnol J ; 18: 1287-1300, 2020.
Article in English | MEDLINE | ID: mdl-32612752

ABSTRACT

Today, we are fully immersed in the era of 3D biology. It has been extensively demonstrated that 3D models: (a) better mimic the physiology of human tissues; (b) can effectively replace animal models; (c) often provide more reliable results than 2D ones. Accordingly, anti-cancer drug screenings and toxicology studies based on multicellular 3D biological models, the so-called "-oids" (e.g. spheroids, tumoroids, organoids), are blooming in the literature. However, the complex nature of these systems limits manual quantitative analysis of single-cell behaviour in the culture. Consequently, there is a pressing demand for advanced software tools able to perform such phenotypic analyses. In this work, we describe the freely accessible tools currently available to biologists and researchers interested in analysing the effects of drugs/treatments on 3D multicellular -oids at single-cell resolution. In addition, using publicly available nuclear-stained datasets, we quantitatively compare the segmentation performance of 9 specific tools.
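A simplified sketch of the kind of quantitative segmentation comparison mentioned above: matching predicted objects to ground-truth nuclei by IoU and reporting the recovered fraction. This is a stand-in for full benchmark protocols, not the code of any of the compared tools.

```python
import numpy as np

def match_iou(gt_labels, pred_labels, iou_threshold=0.5):
    """Fraction of ground-truth objects matched by a predicted object with IoU >= threshold."""
    matched = 0
    for g in range(1, gt_labels.max() + 1):
        gt_mask = gt_labels == g
        best = 0.0
        for p in np.unique(pred_labels[gt_mask]):
            if p == 0:
                continue                                   # skip background
            pred_mask = pred_labels == p
            iou = (np.logical_and(gt_mask, pred_mask).sum()
                   / np.logical_or(gt_mask, pred_mask).sum())
            best = max(best, iou)
        if best >= iou_threshold:
            matched += 1
    return matched / max(gt_labels.max(), 1)

# Toy example: two ground-truth nuclei; the prediction misses most of the second one.
gt = np.zeros((20, 20), dtype=int);   gt[2:8, 2:8] = 1;   gt[12:18, 12:18] = 2
pred = np.zeros((20, 20), dtype=int); pred[2:8, 2:8] = 1;  pred[12:14, 12:18] = 2
print(match_iou(gt, pred))            # 0.5: one of two nuclei recovered
```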

10.
Mol Biol Cell ; 31(20): 2179-2186, 2020 09 15.
Article in English | MEDLINE | ID: mdl-32697683

ABSTRACT

AnnotatorJ combines single-cell identification with deep learning (DL) and manual annotation. Cellular analysis quality depends on accurate and reliable detection and segmentation of cells so that the subsequent steps of analysis, for example, expression measurements, may be carried out precisely and without bias. DL has recently become a popular way of segmenting cells, performing markedly better than conventional methods. However, such DL applications must be trained on a large amount of annotated data to meet the highest expectations. High-quality annotations are unfortunately expensive, as they require field experts to create them, and often cannot be shared outside the lab due to medical regulations. We propose AnnotatorJ, an ImageJ plugin for the semiautomatic annotation of cells (or, generally, objects of interest) on (not only) microscopy images in 2D that helps find the true contour of individual objects by applying U-Net-based presegmentation. The manual labor of hand-annotating cells can be significantly accelerated with our tool. Thus, it enables users to create datasets that could potentially increase the accuracy of state-of-the-art solutions, DL or otherwise, when used as training data.
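A hedged Python illustration of the presegmentation-assisted annotation idea (the actual tool is an ImageJ/Java plugin): a model's probability map is turned into per-object contour polygons that an annotator would then correct rather than draw from scratch.

```python
import numpy as np
from skimage import measure

def proposals_to_polygons(prob_map, threshold=0.5):
    """Turn a model's probability map into per-object contour polygons."""
    mask = prob_map >= threshold
    labels = measure.label(mask)
    polygons = []
    for region in measure.regionprops(labels):
        # find_contours returns (row, col) vertices tracing the object boundary
        contour = measure.find_contours((labels == region.label).astype(float), 0.5)[0]
        polygons.append(contour)
    return polygons

# Toy "U-Net output": a single blob with high probability.
prob = np.zeros((64, 64))
prob[20:40, 20:40] = 0.9
for poly in proposals_to_polygons(prob):
    print(f"editable contour with {len(poly)} vertices")
```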


Subject(s)
Data Curation/methods; Image Processing, Computer-Assisted/methods; Deep Learning; Software
11.
Sci Rep ; 10(1): 5068, 2020 03 19.
Article in English | MEDLINE | ID: mdl-32193485

ABSTRACT

Recent advancements in deep learning have revolutionized the way microscopy images of cells are processed. Deep learning network architectures have a large number of parameters; thus, in order to reach high accuracy, they require a massive amount of annotated data. A common way of improving accuracy builds on artificially increasing the training set using different augmentation techniques. A less common way relies on test-time augmentation (TTA), in which transformed versions of the image are used for prediction and the results are merged. In this paper we describe how we have incorporated the test-time augmentation prediction method into two major segmentation approaches utilized in the single-cell analysis of microscopy images. These approaches are semantic segmentation based on the U-Net model and instance segmentation based on the Mask R-CNN model. Our findings show that even if only simple test-time augmentations (such as rotation or flipping with proper merging methods) are applied, TTA can significantly improve prediction accuracy. We have utilized images of tissue and cell cultures from the Data Science Bowl (DSB) 2018 nuclei segmentation competition and other sources. Additionally, by boosting the highest-scoring method of the DSB with TTA, we could further improve prediction accuracy, and our method achieved the best score reported on the DSB to date.
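A minimal sketch of test-time augmentation for segmentation as described above: predict on transformed copies of the image, invert each transform on the prediction, and merge by averaging the probability maps. The model here is a dummy placeholder; any callable returning a map the same size as its input would fit.

```python
import numpy as np

def tta_predict(model, image):
    """Average model predictions over simple flips and rotations of the input."""
    transforms = [
        (lambda x: x,               lambda y: y),                # identity
        (np.fliplr,                 np.fliplr),                  # horizontal flip
        (np.flipud,                 np.flipud),                  # vertical flip
        (lambda x: np.rot90(x, 1),  lambda y: np.rot90(y, -1)),  # rotate 90 deg
        (lambda x: np.rot90(x, 2),  lambda y: np.rot90(y, -2)),  # rotate 180 deg
        (lambda x: np.rot90(x, 3),  lambda y: np.rot90(y, -3)),  # rotate 270 deg
    ]
    preds = [inverse(model(forward(image))) for forward, inverse in transforms]
    return np.mean(preds, axis=0)          # merged probability map

# Toy example with a dummy "model" that thresholds intensity.
dummy_model = lambda img: (img > 0.5).astype(float)
image = np.random.rand(128, 128)
merged = tta_predict(dummy_model, image)
mask = merged > 0.5                        # final segmentation after merging
```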
